Sparse Cholesky Factorization by Kullback--Leibler Minimization

Authors

Florian Schäfer, Matthias Katzfuss, Houman Owhadi

Abstract

We propose to compute a sparse approximate inverse Cholesky factor $L$ of a dense covariance matrix $\Theta$ by minimizing the Kullback--Leibler divergence between the Gaussian distributions $\mathcal{N}(0, \Theta)$ and $\mathcal{N}(0, L^{-\top} L^{-1})$, subject to a sparsity constraint. Surprisingly, this problem has a closed-form solution that can be computed efficiently, recovering the popular Vecchia approximation in spatial statistics. Based on recent results on the sparsity of Cholesky factors of dense kernel matrices obtained from the pairwise evaluation of Green's functions of elliptic boundary-value problems at points $\{x_{i}\}_{1 \leq i \leq N} \subset \mathbb{R}^{d}$, we propose an elimination ordering and sparsity pattern that allows us to compute $\epsilon$-approximate inverse Cholesky factors of such $\Theta$ in computational complexity $\mathcal{O}(N \log(N/\epsilon)^{d})$ in space and $\mathcal{O}(N \log(N/\epsilon)^{2d})$ in time. To the best of our knowledge, this is the best asymptotic complexity for this class of problems. Furthermore, our method is embarrassingly parallel, automatically exploits low-dimensional structure in the data, and can perform Gaussian-process regression in linear (in $N$) space complexity. Motivated by its optimality properties, we propose applying our method to the joint covariance of training and prediction points in Gaussian-process regression, greatly improving stability and computational cost. Finally, we show how to apply our method to the important setting of Gaussian processes with additive noise, compromising neither accuracy nor computational complexity.
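The closed-form solution has a simple per-column structure: writing $s_i$ for the sparsity set of column $i$ (with $i$ listed first), the KL-optimal column is $L_{s_i, i} = \Theta_{s_i, s_i}^{-1} e_1 / \sqrt{e_1^{\top} \Theta_{s_i, s_i}^{-1} e_1}$. The NumPy sketch below illustrates this formula; the function name and the dense per-column solves are illustrative choices of ours, not the paper's optimized implementation.

```python
import numpy as np

def kl_optimal_factor(Theta, sparsity_sets):
    """Illustrative sketch: column-wise closed-form solution of the
    KL-minimization problem, so that Theta^{-1} ~ L L^T.

    Theta         : (N, N) dense symmetric positive definite covariance matrix.
    sparsity_sets : sparsity_sets[i] lists the indices j >= i allowed to be
                    nonzero in column i of L, with i itself listed first.
    """
    N = Theta.shape[0]
    L = np.zeros((N, N))
    for i in range(N):  # columns decouple, so this loop is embarrassingly parallel
        s = sparsity_sets[i]
        e1 = np.zeros(len(s))
        e1[0] = 1.0
        v = np.linalg.solve(Theta[np.ix_(s, s)], e1)  # Theta_{s,s}^{-1} e_1
        L[s, i] = v / np.sqrt(v[0])  # v[0] = e_1^T Theta_{s,s}^{-1} e_1 > 0
    return L
```

With the full sets $s_i = \{i, \dots, N\}$ this reproduces the exact Cholesky factor of $\Theta^{-1}$; restricting each $s_i$ to nearby points yields a Vecchia-type approximation, and the independent columns can be computed in parallel.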


Similar Articles

Sparse Non-negative Matrix Factorization with Generalized Kullback-Leibler Divergence

Non-negative Matrix Factorization (NMF), especially with sparseness constraints, plays a critically important role in data engineering and machine learning. Hoyer (2004) presented an algorithm to compute NMF with exact sparseness constraints. The exact sparseness constraint depends on a projection operator. In the present work, we first give a very simple counterexample, for which the projecti...
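For reference, the sparseness measure underlying Hoyer's constraint is $\mathrm{sparseness}(x) = (\sqrt{n} - \lVert x \rVert_1 / \lVert x \rVert_2)/(\sqrt{n} - 1)$. A minimal sketch (the function name is ours; this is the quantity the projection operator is meant to enforce, not the projection itself):

```python
import numpy as np

def hoyer_sparseness(x):
    # Hoyer (2004): equals 0 for a constant vector, 1 for a one-hot vector.
    n = x.size
    return (np.sqrt(n) - np.abs(x).sum() / np.linalg.norm(x)) / (np.sqrt(n) - 1)
```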


Kullback-Leibler Divergence for Nonnegative Matrix Factorization

The I-divergence or unnormalized generalization of Kullback--Leibler (KL) divergence is commonly used in Nonnegative Matrix Factorization (NMF). This divergence has the drawback that its gradients with respect to the factorizing matrices depend heavily on the scales of the matrices, and learning the scales in gradient-descent optimization may require many iterations. This is often handled by expl...
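The scale sensitivity described here is one reason the parameter-free Lee--Seung multiplicative updates are a common baseline for the generalized KL objective $D(V \,\|\, WH)$. A minimal sketch for context (this shows the standard updates, not necessarily the normalization scheme the cited paper proposes; the function name is ours):

```python
import numpy as np

def nmf_kl(V, rank, iters=200, eps=1e-12, seed=0):
    """Standard Lee--Seung multiplicative updates minimizing the
    generalized KL (I-)divergence D(V || W H) with W, H >= 0."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(iters):
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.sum(axis=0)[:, None] + eps)
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (H.sum(axis=1)[None, :] + eps)
    return W, H
```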


Parallel Sparse Cholesky Factorization

Sparse matrix factorization plays an important role in many numerical algorithms. In this paper we describe a scalable parallel algorithm based on the Multifrontal Method. Computational experiments on a Parsytec CC system with 32 processors show that large sparse matrices can be factorized in only a few seconds.


Highly Parallel Sparse Cholesky Factorization

We develop and compare several fine-grained parallel algorithms to compute the Cholesky factorization of a sparse matrix. Our experimental implementations are on the Connection Machine, a distributed-memory SIMD machine whose programming model conceptually supplies one processor per data element. In contrast to special-purpose algorithms in which the matrix structure conforms to the connection s...


Modifying a Sparse Cholesky Factorization

Given a sparse symmetric positive definite matrix $AA^{\top}$ and an associated sparse Cholesky factorization $LDL^{\top}$ or $LL^{\top}$, we develop sparse techniques for obtaining the new factorization associated with either adding a column to $A$ or deleting a column from $A$. Our techniques are based on an analysis and manipulation of the underlying graph structure and on ideas of Gill et al. [Math. Comp., 28 (1974), ...
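For intuition: appending a column $w$ to $A$ changes $AA^{\top}$ by the rank-one term $ww^{\top}$, so the task reduces to a rank-one modification of the factor. A minimal dense sketch of the classical rank-one update (the cited paper's contribution is carrying this out sparsely via the graph structure; the function name is ours):

```python
import numpy as np

def chol_update(L, x):
    """Dense rank-one update: given lower-triangular L with A = L L^T,
    return the factor of A + x x^T. Illustrates the underlying linear
    algebra only; it does not exploit sparsity."""
    L = L.copy()
    x = x.astype(float).copy()
    n = x.size
    for k in range(n):
        r = np.hypot(L[k, k], x[k])          # new diagonal entry
        c, s = r / L[k, k], x[k] / L[k, k]   # rotation parameters
        L[k, k] = r
        if k + 1 < n:
            L[k+1:, k] = (L[k+1:, k] + s * x[k+1:]) / c
            x[k+1:] = c * x[k+1:] - s * L[k+1:, k]
    return L
```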



Journal

Journal title: SIAM Journal on Scientific Computing

Year: 2021

ISSN: 1095-7197, 1064-8275

DOI: https://doi.org/10.1137/20m1336254